Last Update: 2025/3/26

Anthropic Messages API

The Anthropic Messages API allows you to generate conversational responses using Anthropic's language models. This document provides an overview of the API endpoints, request parameters, and response structure.

Endpoint

POST https://platform.llmprovider.ai/v1/messages

Request Headers

| Header | Value |
| --- | --- |
| x-api-key | YOUR_API_KEY |
| anthropic-version | 2023-06-01 |
| Content-Type | application/json |
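In code, these headers can be assembled as a plain dictionary before making the request. A minimal sketch in Python; `YOUR_API_KEY` is a placeholder, not a real credential:

```python
# Request headers for the Messages API endpoint.
# "YOUR_API_KEY" is a placeholder -- substitute your actual key.
API_KEY = "YOUR_API_KEY"

headers = {
    "x-api-key": API_KEY,
    "anthropic-version": "2023-06-01",
    "Content-Type": "application/json",
}
```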

Request Body

The request body should be a JSON object with the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | The model to use (e.g., claude-3-opus-20240229). |
| messages | object[] | A list of message objects. |
| messages.role | string | The role of the message sender (user or assistant). |
| messages.content | string | The content of the message. |
| max_tokens | integer | The maximum number of tokens to generate before stopping. |
| metadata | object | (Optional) An object describing metadata about the request. |
| stop_sequences | string[] | (Optional) Custom text sequences that will cause the model to stop generating. |
| stream | boolean | (Optional) Whether to incrementally stream the response using server-sent events. |
| system | string | (Optional) A system prompt: a way of providing context and instructions to Claude, such as specifying a particular goal or role. |
| temperature | number | (Optional) Amount of randomness injected into the response, between 0 and 1. |
| tool_choice | object | (Optional) How the model should use the provided tools. The model can use a specific tool, any available tool, or decide by itself. |
| tools | object[] | (Optional) Definitions of tools that the model may use. |
| top_k | integer | (Optional) Only sample from the top K options for each subsequent token. Required range: x > 0. |
| top_p | number | (Optional) Use nucleus sampling. Required range: 0 < x < 1. |
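The parameters above can be assembled into a request body programmatically. The sketch below uses a hypothetical helper name (`build_message_request` is not part of any SDK) and adds a simple range check for `temperature` as documented in the table:

```python
def build_message_request(model, messages, max_tokens, **options):
    """Assemble a JSON-serializable body for POST /v1/messages (sketch)."""
    payload = {"model": model, "messages": messages, "max_tokens": max_tokens}

    # Validate the documented range for temperature before sending.
    temperature = options.get("temperature")
    if temperature is not None and not 0 <= temperature <= 1:
        raise ValueError("temperature must be between 0 and 1")

    # Pass through any other optional parameters from the table above.
    for key in ("system", "temperature", "top_p", "top_k", "stop_sequences",
                "stream", "metadata", "tools", "tool_choice"):
        if key in options:
            payload[key] = options[key]
    return payload

body = build_message_request(
    "claude-3-5-sonnet-20241022",
    [{"role": "user", "content": "Hello, Claude!"}],
    max_tokens=1024,
    temperature=0.7,
)
```

Only the optional parameters you actually pass end up in the body, which keeps the request minimal.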

Example Request

{
  "model": "claude-3-5-sonnet-20241022",
  "messages": [
    {
      "role": "user",
      "content": "Hello, Claude!"
    }
  ],
  "max_tokens": 1024
}

Response Body

The response body will be a JSON object containing the generated response and metadata.

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for the message. |
| model | string | The model that handled the request. |
| role | string | Conversational role of the generated message. This will always be assistant. |
| content | object[] | Array of content blocks containing the response. |
| content.type | string | The type of content, usually text. |
| content.text | string | The text content of the response. |
| stop_reason | string | The reason the model stopped generating. |
| stop_sequence | string | The stop sequence that caused the model to stop generating, if any. |
| type | string | The type of object returned, always message. |
| usage | object | Token usage statistics for the request. |
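Because the response delivers text inside an array of content blocks, client code typically filters for text-type blocks and joins them. A minimal sketch (`extract_text` is a hypothetical helper name):

```python
def extract_text(response):
    """Concatenate the text of all text-type content blocks."""
    return "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "text"
    )

# Sample response shaped like the example below.
sample = {
    "id": "msg_123abc",
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Hello! How can I help you today?"}],
    "model": "claude-3-5-sonnet-20241022",
    "usage": {"input_tokens": 10, "output_tokens": 8},
}

print(extract_text(sample))  # Hello! How can I help you today?
```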

Example Response

{
  "id": "msg_123abc",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I help you today?"
    }
  ],
  "model": "claude-3-5-sonnet-20241022",
  "usage": {
    "input_tokens": 10,
    "output_tokens": 8
  }
}

Example Request (cURL)

curl -X POST https://platform.llmprovider.ai/v1/messages \
  -H "x-api-key: $YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
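The same call can be made from Python with only the standard library. The sketch below separates building the request from sending it; `build_request` and `send_message` are hypothetical helper names, not part of an official SDK:

```python
import json
import urllib.request

ENDPOINT = "https://platform.llmprovider.ai/v1/messages"

def build_request(api_key, body):
    """Build an urllib Request mirroring the cURL call above."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def send_message(api_key, body):
    """POST the request and decode the JSON response (performs network I/O)."""
    with urllib.request.urlopen(build_request(api_key, body)) as resp:
        return json.load(resp)
```

Splitting construction from transport makes the request easy to inspect or log before any network traffic occurs.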

For more details, refer to the Anthropic API documentation.